Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs that differ in one educationally relevant aspect and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that quantitatively, explainers significantly disagree with each other about what is important, and qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at https://github.com/epfl-ml4ed/trusting-explainers.
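The quantitative comparison of explanation distances can be illustrated with a minimal sketch. The attribution vectors and the specific distance below are illustrative, not the paper's data or its exact metric:

```python
import numpy as np

def to_distribution(attributions):
    """Normalize absolute feature attributions into a probability distribution."""
    a = np.abs(np.asarray(attributions, dtype=float))
    return a / a.sum()

def explanation_distance(attr_a, attr_b):
    """Euclidean distance between two explainers' normalized attribution profiles."""
    return float(np.linalg.norm(to_distribution(attr_a) - to_distribution(attr_b)))

# Hypothetical per-feature importances from two explainers for one course
lime_attr = [0.40, -0.10, 0.05, 0.30]   # e.g., LIME-style local weights
shap_attr = [0.10, -0.35, 0.20, 0.15]   # e.g., SHAP-style values
print(explanation_distance(lime_attr, shap_attr))
```

A distance of zero would mean the two explainers rank features identically in magnitude; larger distances quantify their disagreement.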
Student success models may be prone to developing weak spots, i.e., examples that are hard to classify accurately due to insufficient representation during model creation. This weakness is one of the main factors undermining users' trust, since model predictions could, for instance, lead an instructor not to intervene with a student in need. In this paper, we unveil the need for detecting and characterizing unknown unknowns in student success prediction in order to better understand when models may fail. Unknown unknowns include the students for whom the model is highly confident in its predictions but is actually wrong. Therefore, we cannot rely solely on the model's confidence when evaluating the quality of its predictions. We first introduce a framework for the identification and characterization of unknown unknowns. We then assess its informativeness on log data collected from flipped courses and online courses using quantitative analyses and interviews with instructors. Our results show that unknown unknowns are a critical issue in this domain and that our framework can be applied to support their detection. The source code is available at https://github.com/epfl-ml4ed/unknown-unknowns.
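The core notion of an unknown unknown, a high-confidence misclassification, can be sketched in a few lines. The probabilities and threshold below are made up for illustration and are not the paper's framework:

```python
import numpy as np

def find_unknown_unknowns(probs, labels, confidence_threshold=0.9):
    """Flag instances where the model is highly confident yet wrong.

    probs: predicted probability of the positive class (pass) per student.
    labels: true outcomes (1 = pass, 0 = fail).
    Returns indices of high-confidence misclassifications.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    preds = (probs >= 0.5).astype(int)
    # Confidence is the probability assigned to the predicted class
    confidence = np.where(preds == 1, probs, 1 - probs)
    mask = (confidence >= confidence_threshold) & (preds != labels)
    return np.flatnonzero(mask)

# Hypothetical predictions: student 2 is a confident miss
probs = [0.95, 0.40, 0.97, 0.55]
labels = [1, 0, 0, 1]
print(find_unknown_unknowns(probs, labels))  # prints [2]
```

Student 3 is also misclassified, but at low confidence, so it is an ordinary error rather than an unknown unknown.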
Time series are the most prevalent form of input data for educational prediction tasks. The vast majority of research using time series data focuses on hand-crafted features, designed by experts for predictive performance and interpretability. However, extracting these features is labor-intensive for humans and computers. In this paper, we propose an approach that utilizes irregular multivariate time series modeling with graph neural networks to achieve comparable or better accuracy on raw time series clickstreams than hand-crafted features. Furthermore, we extend concept activation vectors for interpretability in raw time series models. We analyze these advances in the education domain, addressing the task of early student performance prediction for downstream targeted interventions and instructional support. Our experimental analysis on 23 MOOCs, with millions of combined interactions over six behavioral dimensions, shows that models designed with our approach can (i) beat state-of-the-art educational time series baselines with no feature extraction and (ii) provide interpretable insights for personalized interventions. Source code: https://github.com/epfl-ml4ed/ripple/.
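The concept-activation-vector idea referenced above can be illustrated with a generic TCAV-style sketch on synthetic activations; this is not the paper's extension to time series, and the concept name is hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical hidden activations: 50 examples of a behavioral concept
# (e.g., "regular weekly activity") vs. 50 random counterexamples.
concept_acts = rng.normal(loc=1.0, size=(50, 16))
random_acts = rng.normal(loc=0.0, size=(50, 16))

# A concept activation vector (CAV) is the normal of a linear separator
# between concept and random activations in the model's latent space.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 50 + [0] * 50)
clf = LogisticRegression().fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Concept sensitivity: directional derivative of the prediction along the
# CAV, approximated here by projecting a stand-in gradient vector onto it.
hypothetical_gradient = rng.normal(size=16)
sensitivity = float(hypothetical_gradient @ cav)
print(cav.shape, round(sensitivity, 3))
```

In practice the gradient would come from the trained prediction model, and the sign of the sensitivity indicates whether the concept pushes predictions up or down.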
Natural language processing (NLP) has increasingly been used to provide adaptivity in educational applications. However, recent research has highlighted a variety of biases in pre-trained language models. While existing studies investigate bias in different domains, they are limited in addressing fine-grained analysis of educational and multilingual corpora. In this work, we analyze bias across text and through multiple architectures on a corpus of 9,165 German peer reviews collected from university students over five years. Notably, our corpus includes labels such as helpfulness, quality, and critical-aspect ratings from the peer-review recipients, as well as demographic attributes. We conduct a Word Embedding Association Test (WEAT) analysis on (1) our collected corpus in connection with the clustered labels, (2) the most common pre-trained German language models (T5, BERT, and GPT-2) and GloVe embeddings, and (3) the language models after fine-tuning on our collected dataset. Contrary to our initial expectations, we find that our collected corpus does not reveal many biases in the co-occurrence analysis or in the GloVe embeddings. However, the pre-trained German language models exhibit substantial conceptual, racial, and gender bias, and the bias across the conceptual and racial axes changes significantly during fine-tuning on the peer-review data. With our research, we aim to contribute to the fourth UN sustainable development goal (quality education) with a novel dataset, an understanding of biases in natural language educational data, and the potential harms of not counteracting biases in language models for educational tasks.
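The WEAT statistic used above measures how differently two sets of target words associate with two sets of attribute words. A minimal sketch on toy 2-d vectors (not real German model embeddings):

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: differential association of target word sets X, Y
    with attribute word sets A, B, in units of pooled standard deviation."""
    def s(w):
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    s_X = [s(x) for x in X]
    s_Y = [s(y) for y in Y]
    pooled = np.std(s_X + s_Y, ddof=1)
    return (np.mean(s_X) - np.mean(s_Y)) / pooled

# Toy embeddings: X words lean toward attribute A, Y words toward B,
# so the effect size should come out clearly positive.
X = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
Y = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
print(round(weat_effect_size(X, Y, A, B), 2))
```

An effect size near zero indicates no measurable association bias between the target and attribute sets; larger magnitudes indicate stronger bias.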
Interactive simulations allow students to discover the underlying principles of a scientific phenomenon through their own exploration. Unfortunately, students often struggle to learn effectively in these environments. Classifying students' interaction data according to their expected performance has the potential to enable adaptive guidance and consequently improve students' learning. Previous research in this field has mainly focused on a-posteriori analyses or on studies limited to one specific predictive model and simulation. In this paper, we investigate the quality and generalizability of models for early prediction of conceptual understanding based on students' clickstream data across interactive simulations. We first measure students' conceptual understanding through their task performance. We then propose a novel type of features that, starting from the clickstream data, encodes both the state of the simulation and the actions performed by the student. We finally propose to feed these features into GRU-based models, with and without attention, for prediction. Experiments on two different simulations and with two different populations show that our proposed models outperform shallow learning baselines and generalize better to different learning environments and populations. Including attention in the model enables effective queries. The source code is available on GitHub (https://github.com/epfl-ml4ed/beerslaw-lab.git).
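The state-and-action feature idea can be sketched as a one-hot encoding of each logged event. The state and action vocabularies below are invented for illustration and are not the paper's actual feature set:

```python
import numpy as np

# Hypothetical simulation log vocabulary: each event is a (state, action) pair.
STATES = ["setup", "observing", "measuring"]
ACTIONS = ["drag_slider", "toggle_laser", "read_value"]

def encode_event(state, action):
    """One-hot encode a (simulation state, student action) pair into a
    single feature vector, so the sequence model sees both together."""
    vec = np.zeros(len(STATES) + len(ACTIONS))
    vec[STATES.index(state)] = 1.0
    vec[len(STATES) + ACTIONS.index(action)] = 1.0
    return vec

clickstream = [("setup", "drag_slider"), ("measuring", "read_value")]
sequence = np.stack([encode_event(s, a) for s, a in clickstream])
print(sequence.shape)  # one row per event, ready to feed a GRU
```

Each row carries exactly one active state bit and one active action bit, which is what lets a recurrent model condition on both simultaneously.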
Neural networks are ubiquitous in applied machine learning for education. Their pervasive success in predictive performance comes alongside a severe weakness: the lack of explainability of their decisions, especially relevant in human-centric fields. We implement five state-of-the-art methodologies for explaining black-box machine learning models (LIME, PermutationSHAP, KernelSHAP, DiCE, CEM) and examine the strengths of each approach on the downstream task of student performance prediction for five massive open online courses. Our experiments demonstrate that the families of explainers do not agree with each other on feature importance for the same bidirectional LSTM model applied to the same representative set of students. We use Principal Component Analysis, Jensen-Shannon distance, and Spearman's rank-order correlation to quantitatively cross-examine explanations across methods and courses. Furthermore, we validate explainer performance across course-based prerequisite relationships. Our results conclude that the choice of explainer is an important decision and is, in fact, paramount to the interpretation of the predictive results, even more so than the course the model is trained on. Source code and models are released at http://github.com/epfl-ml4ed/evaluating-explainers.
Attention-based multiple instance learning (AMIL) algorithms have proven to be successful in utilizing gigapixel whole-slide images (WSIs) for a variety of different computational pathology tasks such as outcome prediction and cancer subtyping problems. We extended an AMIL approach to the task of survival prediction by utilizing the classical Cox partial likelihood as a loss function, converting the AMIL model into a nonlinear proportional hazards model. We applied the model to tissue microarray (TMA) slides of 330 lung cancer patients. The results show that AMIL approaches can handle very small amounts of tissue from a TMA and reach a C-index performance similar to established survival prediction methods trained with highly discriminative clinical factors such as age, cancer grade, and cancer stage.
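The Cox partial likelihood loss mentioned above can be written compactly. A minimal NumPy sketch with invented risk scores and survival times, without ties handling or the AMIL attention pooling:

```python
import numpy as np

def cox_partial_likelihood_loss(risk_scores, times, events):
    """Negative log Cox partial likelihood: the loss that turns a model's
    scalar risk score into a nonlinear proportional hazards model.

    risk_scores: model outputs (higher = higher hazard), one per patient.
    times: observed survival or censoring times.
    events: 1 if the event was observed, 0 if the patient was censored.
    """
    risk_scores = np.asarray(risk_scores, dtype=float)
    loss, n_events = 0.0, 0
    for i in range(len(risk_scores)):
        if events[i] == 1:
            at_risk = times >= times[i]  # risk set: still under observation
            loss -= risk_scores[i] - np.log(np.exp(risk_scores[at_risk]).sum())
            n_events += 1
    return loss / max(n_events, 1)

times = np.array([5.0, 8.0, 3.0, 10.0])
events = np.array([1, 0, 1, 1])
print(round(cox_partial_likelihood_loss([0.2, -0.1, 0.8, -0.5], times, events), 3))
```

Scores that rank short-lived patients as high risk yield a lower loss than the reversed ranking, which is what gradient descent on this loss exploits.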
Job interviews are usually high-stakes social situations in which professional and behavioral skills are required for a satisfactory outcome. Professional job interview trainers give educative feedback on the displayed behavior according to common standards. Such feedback can be helpful for improving the behavioral skills needed in job interviews. A technological approach to generating this kind of feedback could be a playful and low-stakes starting point for job interview training. We therefore extended an interactive virtual job interview training system with a Generative Adversarial Network (GAN)-based approach that first detects behavioral weaknesses and subsequently generates personalized feedback. To evaluate the usefulness of the generated feedback, we conducted a mixed-methods pilot study using mock-ups of the job interview training system. The overall study results indicate that the GAN-based generated behavioral feedback is helpful. Moreover, participants assessed that the feedback would improve their job interview performance.
Even though multi-modal multi-objective evolutionary algorithms (MMOEAs) are designed to find solutions well spread over all locally optimal approximation sets of a multi-modal multi-objective optimization problem (MMOP), a risk is that the found set of solutions cannot be navigated smoothly because the solutions belong to various niches, reducing the insight for decision makers. To tackle this issue, a new MMOEA is proposed: the Multi-Modal Bézier Evolutionary Algorithm (MM-BezEA), which produces approximation sets that cover individual niches and exhibit inherent decision-space smoothness, as they are parameterized by Bézier curves. MM-BezEA combines the concepts behind the recently introduced BezEA and MO-HillVallEA to find all locally optimal approximation sets. When benchmarked against the MMOEAs MO_Ring_PSO_SCD and MO-HillVallEA on MMOPs with linear Pareto sets, MM-BezEA was found to perform best in terms of best hypervolume.
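The smoothness property comes from parameterizing each approximation set as a Bézier curve in decision space. A minimal sketch of evaluating such a curve with de Casteljau's algorithm, using made-up control points rather than MM-BezEA's evolved ones:

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at t in [0, 1] via de Casteljau's algorithm.
    Parameterizing an approximation set this way makes neighboring
    solutions vary smoothly along the curve."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]  # repeated linear interpolation
    return pts[0]

# Hypothetical 2-d decision space with three control points; sampling the
# curve yields a smoothly navigable set of candidate solutions.
controls = [[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]]
approximation_set = [bezier_point(controls, t) for t in np.linspace(0, 1, 5)]
print(np.round(approximation_set, 2))
```

Because the whole set is determined by a handful of control points, the optimizer evolves those points rather than each solution independently, which is what enforces decision-space smoothness by construction.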
There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, and the largest of those trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model, GatorTron, using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on 5 clinical NLP tasks: clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale the clinical language model from 110 million to 8.9 billion parameters and improve all 5 clinical NLP tasks (e.g., 9.6% and 9.5% improvement in accuracy for NLI and MQA, respectively), which can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.